913 results for Object naming
Abstract:
The ability to isolate a single sound source among concurrent sources and reverberant energy is necessary for understanding the auditory world. The precedence effect describes a related experimental finding: when presented with identical sounds from two locations with a short onset asynchrony (on the order of milliseconds), listeners report a single source whose location is dominated by the lead sound. Single-cell recordings in multiple animal models have indicated that low-level mechanisms may contribute to the precedence effect, yet psychophysical studies in humans have provided evidence that top-down cognitive processes strongly influence the perception of simulated echoes. In the present study, event-related potentials evoked by click pairs at and around listeners' echo thresholds indicate that perception of the lead and lag sounds as individual sources elicits a negativity between 100 and 250 msec, previously termed the object-related negativity (ORN). Even for physically identical stimuli, the ORN is evident when listeners report hearing, as compared with not hearing, a second sound source. These results define a neural mechanism related to the conscious perception of multiple auditory objects.
Abstract:
Gemstone Team Vision
Abstract:
Our ability to track an object as the same persisting entity over time and motion may primarily rely on spatiotemporal representations which encode some, but not all, of an object's features. Previous researchers using the 'object reviewing' paradigm have demonstrated that such representations can store featural information of well-learned stimuli such as letters and words at a highly abstract level. However, it is unknown whether these representations can also store purely episodic information (i.e. information obtained from a single, novel encounter) that does not correspond to pre-existing type-representations in long-term memory. Here, in an object-reviewing experiment with novel face images as stimuli, observers still produced reliable object-specific preview benefits in dynamic displays: a preview of a novel face on a specific object speeded recognition of that particular face when it appeared again later on the same object, compared with when it reappeared on a different object (beyond display-wide priming), even when all objects moved to new positions in the intervening delay. This case study demonstrates that the mid-level visual representations which keep track of persisting identity over time ('object files', in one popular framework) can store not only abstract types from long-term memory, but also specific tokens from online visual experience.
Abstract:
The naming impairments in Alzheimer's disease (AD) have been attributed to a variety of cognitive processing deficits, including impairments in semantic memory, visual perception, and lexical access. To further understand the underlying biological basis of the naming failures in AD, the present investigation examined the relationship of various classes of naming errors to regional brain measures of cerebral glucose metabolism, as measured with 18F-fluoro-2-deoxyglucose (FDG) and positron emission tomography (PET). Errors committed on a visual naming test were categorized according to a cognitive processing schema and then examined in relation to metabolism within specific brain regions. The results revealed an association of semantic errors with glucose metabolism in the frontal and temporal regions. Language access errors, such as circumlocutions and word-blocking nonresponses, were associated with decreased metabolism in areas within the left hemisphere. Visuoperceptive errors were related to right inferior parietal metabolic function. The findings suggest that specific brain areas mediate the perceptual, semantic, and lexical processing demands of visual naming and that visual naming problems in dementia are related to dysfunction in specific neural circuits.
Abstract:
A regularized algorithm for the recovery of band-limited signals from noisy data is described. The regularization is characterized by a single parameter. Iterative and non-iterative implementations of the algorithm are shown to have useful properties, the former offering the advantage of flexibility and the latter a potential for rapid data processing. Comparative results, using experimental data obtained in laser anemometry studies with a photon correlator, are presented both with and without regularization. © 1983 Taylor & Francis Ltd.
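As a sketch only (the abstract does not reproduce the algorithm's equations), a single-parameter regularized recovery of a band-limited signal f from noisy data g = Af + n, with A the band-limiting observation operator, is commonly written in two equivalent forms; the symbols mu (the regularization parameter) and tau (the iteration step) are assumptions standing in for the paper's notation:

    \hat{f}_\mu = (A^\ast A + \mu I)^{-1} A^\ast g                  % non-iterative (Tikhonov-type) estimate
    f_{k+1} = f_k + \tau\,[\,A^\ast(g - A f_k) - \mu f_k\,]         % iterative form; converges to \hat{f}_\mu for 0 < \tau < 2/(\|A\|^2 + \mu)

The iterative form offers the flexibility noted in the abstract (it can be stopped early or constrained at each step), while the closed form suits rapid data processing.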
Abstract:
An analysis is carried out, using the prolate spheroidal wave functions, of certain regularized iterative and noniterative methods previously proposed for the achievement of object restoration (or, equivalently, spectral extrapolation) from noisy image data. The ill-posedness inherent in the problem is treated by means of a regularization parameter, and the analysis shows explicitly how the deleterious effects of the noise are then contained. The error in the object estimate is also assessed, and it is shown that the optimal choice for the regularization parameter depends on the signal-to-noise ratio. Numerical examples are used to demonstrate the performance of both unregularized and regularized procedures and also to show how, in the unregularized case, artefacts can be generated from pure noise. Finally, the relative error in the estimate is calculated as a function of the degree of superresolution demanded for reconstruction problems characterized by low space–bandwidth products.
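For a concrete (assumed, textbook-style) instance of this kind of analysis: if A is the self-adjoint finite-observation operator with prolate spheroidal eigenfunctions \psi_k and eigenvalues \lambda_k, a Tikhonov-regularized object estimate and a standard parameter choice read

    \hat{f} = \sum_k \frac{\lambda_k}{\lambda_k^2 + \mu}\,\langle g, \psi_k \rangle\,\psi_k, \qquad \mu \simeq (\varepsilon/E)^2,

where \varepsilon bounds the data noise and E the object norm, so the optimal \mu scales with the inverse squared signal-to-noise ratio, consistent with the abstract's conclusion; these specific formulas are illustrative, not necessarily those of the paper.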
Abstract:
In this paper we consider the problems of object restoration and image extrapolation, according to the regularization theory of improperly posed problems. In order to take into account the stochastic nature of the noise and to introduce the main concepts of information theory, great attention is devoted to the probabilistic methods of regularization. The kind of continuity restored is investigated in detail; in particular we prove that, while image extrapolation exhibits Hölder-type stability, object restoration has only logarithmic continuity. © 1979 American Institute of Physics.
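The two kinds of continuity contrasted here can be stated generically (the constants and exponents below are illustrative; the abstract does not give them): with data error \varepsilon, Hölder-type stability bounds the reconstruction error as

    \|f_\varepsilon - f\| \le C\,\varepsilon^{\alpha}, \quad 0 < \alpha \le 1,

whereas logarithmic continuity gives only

    \|f_\varepsilon - f\| \le C\,|\log \varepsilon|^{-\beta}, \quad \beta > 0,

so reducing the noise by an order of magnitude improves a Hölder-continuous extrapolation by a fixed factor but barely improves a logarithmically continuous restoration.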
Abstract:
We propose a new formulation of Miller's regularization theory, which is particularly suitable for object restoration problems. By means of simple geometrical arguments, we obtain upper and lower bounds for the errors on regularized solutions. This leads us to distinguish between 'Hölder continuity', which is quite good for practical computations, and 'logarithmic continuity', which is very poor. However, in the latter case, one can reconstruct local weighted averages of the solution. This procedure allows for precise evaluations of the resolution attainable in a given problem. Numerical computations, made for object restoration beyond the diffraction limit in Fourier optics, show that, when logarithmic continuity holds, the resolution is practically independent of the data noise level. © 1980 Taylor & Francis Group, LLC.
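In Miller's formulation (the following restatement is standard, offered as context rather than as the paper's exact equations), one seeks objects f compatible with a noise bound and a prescribed bound,

    \|Af - g\| \le \varepsilon, \qquad \|Bf\| \le E,

and any two such solutions differ by at most 2 M(\varepsilon, E), where M(\varepsilon, E) = \sup\{\|f\| : \|Af\| \le \varepsilon,\ \|Bf\| \le E\}. Hölder continuity corresponds to M \le C\,\varepsilon^{\alpha} E^{1-\alpha} with 0 < \alpha \le 1, and logarithmic continuity to M \le C\,E\,|\log(\varepsilon/E)|^{-\beta}.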
Abstract:
Many code generation tools exist to aid developers in carrying out common mappings, such as Object to XML or Object to relational database. Such generated code tends to bind the object code tightly to the target mapping, making integration into a broader application tedious or even impossible. In this paper we suggest that XML technologies, together with the multiple-inheritance capabilities of interface-based languages such as Java, offer a means to unify such executable specifications, thus building complete, consistent and useful object models declaratively, without sacrificing component flexibility.
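A minimal sketch of the idea, with hypothetical names (XmlPersistable, RelationalPersistable, and Customer are illustrative assumptions, not the paper's API): because a Java class may implement many interfaces, each mapping concern can be expressed as its own interface, and a domain object can participate in several mappings without being bound to any one generated hierarchy:

    // Hypothetical illustration: each mapping concern is a separate interface,
    // and a domain class opts into several of them at once.
    interface XmlPersistable {
        String toXml();
    }

    interface RelationalPersistable {
        String toInsertSql(String table);
    }

    // The domain object implements both mapping facets declaratively,
    // so neither mapping dominates or constrains the object model.
    class Customer implements XmlPersistable, RelationalPersistable {
        private final int id;
        private final String name;

        Customer(int id, String name) {
            this.id = id;
            this.name = name;
        }

        @Override
        public String toXml() {
            return "<customer id=\"" + id + "\"><name>" + name + "</name></customer>";
        }

        @Override
        public String toInsertSql(String table) {
            return "INSERT INTO " + table + " (id, name) VALUES (" + id + ", '" + name + "')";
        }

        public static void main(String[] args) {
            Customer c = new Customer(42, "Ada");
            System.out.println(c.toXml());                  // object-to-XML mapping
            System.out.println(c.toInsertSql("customers")); // object-to-relational mapping
        }
    }

A generator or XML-driven configuration could then target each interface separately, keeping the mappings composable rather than baked into a single class hierarchy.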
Abstract:
The UK government started the UK eUniversities project in order to create a virtual campus for online education provision, competing in a global market. The UKeU (WWW.ukeu.com) claims to "have created a new approach to e-learning" which "opens up a range of exciting opportunities for students, business and industry worldwide" to obtain both postgraduate and undergraduate qualifications. Although there have been many promises about an e-learning revolution using state-of-the-art multimedia technology, closer scrutiny of what is being delivered reveals that many of the e-learning models currently in use are little more than old text-based computer-aided learning running on a global network. As part of the UKeU project, a consortium of universities has been involved in developing a two-year foundation degree from 2004. We look at the approach taken by the consortium in developing global e-learning provision and the problems and pitfalls that lie ahead.