894 results for similarity retrieval
Abstract:
The difference between cirrus emissivities at 8 and 11 μm is sensitive to the mean effective ice crystal size of the cirrus cloud, De. By using single-scattering properties of ice crystals shaped as planar polycrystals, diameters of up to about 70 μm can be retrieved, instead of only up to about 45 μm when spheres or hexagonal columns are assumed. The method described in this article is used for a global determination of mean effective ice crystal sizes of cirrus clouds from TOVS satellite observations. A sensitivity study of the De retrieval to uncertainties in the assumed ice crystal shape, size distributions, and temperature profiles, as well as in vertical and horizontal cloud heterogeneities, shows that uncertainties can be as large as 30%. However, the TOVS data set is one of the few data sets that provide global and long-term coverage. Analysis of the years 1987–1991 shows that the measured effective ice crystal diameters De are stable from year to year. For 1990 a global median De of 53.5 μm was determined. Averages distinguishing ocean/land, season, and latitude lie between 23 μm in winter over Northern Hemisphere midlatitude land and 64 μm in the tropics. In general, larger values of De are found in regions with higher atmospheric water vapor and for cirrus with a smaller effective emissivity.
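The retrieval principle, inverting the relationship between the 8 and 11 μm emissivity difference and De, can be illustrated with a minimal sketch. The lookup values and the function name retrieve_de below are illustrative placeholders, not the planar-polycrystal scattering results or the TOVS processing chain used in the paper.

```python
import numpy as np

# Illustrative lookup table relating the 11-minus-8 micron cirrus emissivity
# difference to the mean effective ice crystal diameter De (micrometres).
# Placeholder values for this sketch only, NOT the polycrystal single-scattering
# calculations used in the actual retrieval.
DE_GRID_UM = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 70.0])
EMISS_DIFF = np.array([0.20, 0.14, 0.10, 0.07, 0.05, 0.035])  # monotonically decreasing

def retrieve_de(emissivity_8um: float, emissivity_11um: float) -> float:
    """Invert the (hypothetical) emissivity-difference curve to estimate De."""
    diff = emissivity_11um - emissivity_8um
    # np.interp requires increasing x values, so both arrays are reversed.
    return float(np.interp(diff, EMISS_DIFF[::-1], DE_GRID_UM[::-1]))

print(retrieve_de(emissivity_8um=0.62, emissivity_11um=0.70))  # ~47 um with these placeholder values
```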
Abstract:
Flow and turbulence above urban terrain are more complex than above rural terrain, owing to the different momentum and heat transfer characteristics that are affected by the presence of buildings (e.g. pressure variations around buildings). The applicability of similarity theory (as developed over rural terrain) is tested using about 6500 h of flow observations from a sonic anemometer located at a height of 190.3 m in London, U.K. Turbulence statistics (dimensionless wind speed and temperature, standard deviations, and correlation coefficients for momentum and heat transfer) were analysed in three ways. First, turbulence statistics were plotted as a function only of a local stability parameter z/Λ (where Λ is the local Obukhov length and z is the height above ground); the σ_i/u_* values (i = u, v, w) for neutral conditions are 2.3, 1.85 and 1.35, respectively, similar to canonical values. Second, an analysis of urban mixed-layer formulations during daytime convective conditions over London was undertaken, showing that atmospheric turbulence at high altitude over large cities may not behave very differently from that over rural terrain. Third, correlation coefficients for heat and momentum were analysed with respect to local stability. The results give confidence in using the framework of local similarity for turbulence measured over London, and perhaps other cities. However, the following caveats for our data are worth noting: (i) the terrain is reasonably flat, (ii) building heights vary little over a large area, and (iii) the sensor height is above the mean roughness sublayer depth.
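For context, a minimal sketch of the standard local-scaling quantities used in such an analysis (local friction velocity u_*, local Obukhov length Λ, stability parameter z/Λ, and σ_i/u_*) as computed from sonic anemometer covariances is shown below. The formulas are textbook definitions; the function name and structure are our own assumptions, not the authors' processing chain.

```python
import numpy as np

KAPPA = 0.4   # von Karman constant
G = 9.81      # gravitational acceleration (m s^-2)

def local_similarity_stats(u, v, w, theta_v, z):
    """Standard local-scaling quantities from sonic anemometer time series.

    u, v, w : wind components (m s^-1); theta_v : virtual potential temperature (K);
    z       : measurement height above ground (m). All series are numpy arrays.
    """
    def cov(a, b):
        return np.mean((a - a.mean()) * (b - b.mean()))

    uw, vw, wt = cov(u, w), cov(v, w), cov(w, theta_v)
    u_star = (uw**2 + vw**2) ** 0.25                          # local friction velocity
    Lambda = -u_star**3 * theta_v.mean() / (KAPPA * G * wt)   # local Obukhov length
    sigma_over_ustar = {c: np.std(x) / u_star for c, x in (("u", u), ("v", v), ("w", w))}
    return {"zeta": z / Lambda, "u_star": u_star, "sigma_i/u_star": sigma_over_ustar}
```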
Abstract:
Cytenamide form I (space group R-3) undergoes a solid-state transformation upon heating to form II (space group P-1), with the structures exhibiting the same two-dimensional similarity that exists between the R-3 and P-1 forms of carbamazepine.
Abstract:
Carruthers' "mindreading is prior" model postulates one unitary mindreading mechanism working identically for self and other. While we agree about shared mindreading mechanisms, there is also evidence from neuroimaging and from mentalizing about dissimilar others that suggests factors that differentially affect self- versus other-mentalizing. Such dissociations suggest greater complexity than the "mindreading is prior" model allows.
Abstract:
The artificial grammar (AG) learning literature (see, e.g., Mathews et al., 1989; Reber, 1967) has relied heavily on a single measure of implicitly acquired knowledge. Recent work comparing this measure (string classification) with a more indirect measure in which participants make liking ratings of novel stimuli (e.g., Manza & Bornstein, 1995; Newell & Bright, 2001) has shown that string classification (which we argue can be thought of as an explicit, rather than an implicit, measure of memory) gives rise to more explicit knowledge of the grammatical structure in learning strings and is more resilient to changes in surface features and processing between encoding and retrieval. We report data from two experiments that extend these findings. In Experiment 1, we showed that a divided attention manipulation (at retrieval) interfered with explicit retrieval of AG knowledge but did not interfere with implicit retrieval. In Experiment 2, we showed that forcing participants to respond within a very tight deadline resulted in the same asymmetric interference pattern between the tasks. In both experiments, we also showed that the type of information being retrieved influenced whether interference was observed. The results are discussed in terms of the relatively automatic nature of implicit retrieval and also with respect to the differences between analytic and nonanalytic processing (Whittlesea & Price, 2001).
Abstract:
Background: Problems with lexical retrieval are common across all types of aphasia, but certain word classes are thought to be more vulnerable in some aphasia types. Traditionally, verb retrieval problems have been considered characteristic of non-fluent aphasias, but there is growing evidence that verb retrieval problems are also found in fluent aphasia. As verbs are retrieved from the mental lexicon with syntactic as well as phonological and semantic information, it is speculated that an improvement in verb retrieval should enhance communicative abilities in this population as in others. We report on an investigation into the effectiveness of verb treatment for three individuals with fluent aphasia. Methods & Procedures: Multiple pre-treatment baselines were established over 3 months in order to monitor language change before treatment. The three participants then received twice-weekly verb treatment over approximately 4 months. All pre-treatment assessments were re-administered immediately after treatment and at 3 months post-treatment. Outcome & Results: Scores fluctuated in the pre-treatment period. Following treatment, there was a significant improvement in verb retrieval for two of the three participants on the treated items. The increase in scores for the third participant was statistically nonsignificant, but post-treatment scores moved from below the normal range to within the normal range. All participants were significantly quicker in the verb retrieval task following treatment. There was an increase in well-formed sentences in the sentence construction test and in some samples of connected speech. Conclusions: Repeated systematic treatment can produce a significant improvement in verb retrieval of practised items and generalise to unpractised items for some participants. An increase in well-formed sentences is seen for some speakers. The theoretical and clinical implications of the results are discussed.
Abstract:
The aim of this study was to investigate the widely held, but largely untested, view that implicit memory (repetition priming) reflects an automatic form of retrieval. Specifically, in Experiment 1 we explored whether a secondary task (syllable monitoring), performed during retrieval, would disrupt performance on explicit (cued recall) and implicit (stem completion) memory tasks equally. Surprisingly, despite substantial memory and secondary-task costs to cued recall when it was performed with a syllable-monitoring task, the same manipulation had no effect on stem completion priming or on secondary task performance. In Experiment 2 we demonstrated that even when using a particularly demanding version of the stem completion task that incurred secondary task costs, the corresponding disruption to implicit memory performance was minimal. Collectively, the results are consistent with the view that implicit memory retrieval requires little or no processing capacity and is seemingly not susceptible to the effects of dividing attention at retrieval.
Abstract:
The feature model of immediate memory (Nairne, 1990) is applied to an experiment testing individual differences in phonological confusions among a group (N = 100) of participants performing a verbal memory test. By simulating the performance of an equivalent number of “pseudo-participants”, the model fits both the mean performance and the variability within the group. The experimental data show that high-performing individuals are significantly more likely to demonstrate phonological confusions than low-performing individuals, and this is also true of the model, despite the model’s lack of either an explicit phonological store or a performance-linked strategy shift away from phonological storage. It is concluded that a dedicated phonological store is not necessary to explain the basic phonological confusion effect, and that the reduction in such an effect can also be explained without requiring a change in encoding or rehearsal strategy or the deployment of a different storage buffer.
Abstract:
There are still major challenges in the area of automatic indexing and retrieval of multimedia content for very large multimedia corpora. Current indexing and retrieval applications still use keywords to index multimedia content, and those keywords usually do not provide any knowledge about the semantic content of the data. With the increasing amount of multimedia content, it is inefficient to continue with this approach. In this paper, we describe the DREAM project, which addresses these challenges by proposing a new framework for semi-automatic annotation and retrieval of multimedia based on semantic content. The framework uses Topic Map technology as a tool to model the knowledge automatically extracted from the multimedia content by an Automatic Labelling Engine. We describe how we acquire knowledge from the content and represent this knowledge, with the support of NLP, to automatically generate Topic Maps. The framework is described in the context of film post-production.
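As a rough illustration of the kind of output such a pipeline could produce, the sketch below serialises a few extracted topics and associations into a simplified XTM-style topic map. The element layout only approximates the XTM 2.0 vocabulary, and the topics are invented examples, not output of the actual DREAM Automatic Labelling Engine.

```python
import xml.etree.ElementTree as ET

# Invented example of knowledge extracted from a film post-production clip.
topics = {"t-explosion": "explosion", "t-scene42": "scene 42", "t-occurs-in": "occurs in"}
associations = [("t-occurs-in", "t-explosion", "t-scene42")]  # (association type, role 1, role 2)

tm = ET.Element("topicMap", {"xmlns": "http://www.topicmaps.org/xtm/", "version": "2.0"})
for tid, name in topics.items():
    topic = ET.SubElement(tm, "topic", {"id": tid})
    ET.SubElement(ET.SubElement(topic, "name"), "value").text = name
for assoc_type, r1, r2 in associations:
    assoc = ET.SubElement(tm, "association")
    ET.SubElement(ET.SubElement(assoc, "type"), "topicRef", {"href": "#" + assoc_type})
    for member in (r1, r2):
        role = ET.SubElement(assoc, "role")
        ET.SubElement(ET.SubElement(role, "type"), "topicRef", {"href": "#" + assoc_type})
        ET.SubElement(role, "topicRef", {"href": "#" + member})

print(ET.tostring(tm, encoding="unicode"))
```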
Abstract:
There are still major challenges in the area of automatic indexing and retrieval of digital data. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieval of such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. Research has been ongoing for several years in the field of ontological engineering, with the aim of using ontologies to add knowledge to information. In this paper we describe the architecture of a system designed to automatically and intelligently index huge repositories of special-effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval.
Abstract:
A large volume of visual content remains inaccessible until effective and efficient indexing and retrieval of such data is achieved. In this paper, we introduce the DREAM system, a knowledge-assisted, semantic-driven, context-aware visual information retrieval system applied in the film post-production domain. We mainly focus on the automatic labelling and Topic Map related aspects of the framework. The use of context-related collateral knowledge, represented by a novel probabilistic visual keyword co-occurrence matrix, has been proven effective in the experiments conducted during system evaluation. The automatically generated semantic labels were fed into the Topic Map Engine, which automatically constructs ontological networks using Topic Maps technology, dramatically enhancing the indexing and retrieval performance of the system at a higher semantic level.
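A minimal sketch of how a visual keyword co-occurrence matrix might be used to bias labelling is given below: candidate labels for a region are re-weighted by how often they co-occur with labels already assigned elsewhere in the same shot. The keyword set, counts, and scoring rule are invented for illustration and are not the DREAM implementation.

```python
import numpy as np

KEYWORDS = ["fire", "smoke", "water", "sky"]
IDX = {k: i for i, k in enumerate(KEYWORDS)}

# Hypothetical co-occurrence counts gathered from annotated training frames.
cooc = np.array([[50, 40,  2, 10],
                 [40, 60,  5, 20],
                 [ 2,  5, 30, 25],
                 [10, 20, 25, 70]], dtype=float)
cond_prob = cooc / cooc.sum(axis=1, keepdims=True)   # P(column keyword | row keyword)

def rerank(candidate_scores, context_labels):
    """Bias visual classifier scores by co-occurrence with already-assigned labels."""
    scores = dict(candidate_scores)
    for label in scores:
        context_support = np.mean([cond_prob[IDX[c], IDX[label]] for c in context_labels])
        scores[label] *= context_support
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

# "smoke" gains over "sky" once "fire" is known to appear in the same shot.
print(rerank({"smoke": 0.45, "sky": 0.55}, context_labels=["fire"]))
```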
Abstract:
In many data mining applications, automated retrieval of text and image information is needed. This becomes essential with the growth of the Internet and digital libraries. Our approach is based on latent semantic indexing (LSI) and the corresponding term-by-document matrix suggested by Berry and his co-authors. Instead of using deterministic methods to find the first k singular triplets, we propose a stochastic approach. First, we use a Monte Carlo method to sample the term-by-document matrix and build a much smaller matrix (e.g. a k × k matrix), from which we then find the first k triplets using standard deterministic methods. Second, we investigate how the problem can be reduced to finding the k largest eigenvalues using parallel Monte Carlo methods. We apply these methods to the initial matrix and also to the reduced one. The algorithms run on a cluster of workstations under MPI; we present results of experiments on textual retrieval of Web documents, as well as a comparison of the proposed stochastic methods. (C) 2003 IMACS. Published by Elsevier Science B.V. All rights reserved.
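A minimal sketch of the column-sampling idea follows: columns of the term-by-document matrix are drawn with probability proportional to their squared norms, rescaled, and the SVD of the resulting small matrix approximates the leading singular triplets. This follows a standard Monte Carlo column-sampling scheme and is not necessarily the exact algorithm, nor the parallel MPI implementation, used in the paper.

```python
import numpy as np

def monte_carlo_svd(A, k, s, rng=np.random.default_rng(0)):
    """Approximate the top-k singular triplets of a term-by-document matrix A
    by Monte Carlo column sampling (s sampled columns, s >= k)."""
    col_norms_sq = np.sum(A * A, axis=0)
    probs = col_norms_sq / col_norms_sq.sum()
    cols = rng.choice(A.shape[1], size=s, replace=True, p=probs)
    # Rescale sampled columns so that C @ C.T is an unbiased estimate of A @ A.T.
    C = A[:, cols] / np.sqrt(s * probs[cols])
    # The SVD of the small matrix C yields approximate left singular vectors of A.
    U, sigma, _ = np.linalg.svd(C, full_matrices=False)
    U_k, sigma_k = U[:, :k], sigma[:k]
    V_k = (A.T @ U_k) / sigma_k          # back out approximate right singular vectors
    return U_k, sigma_k, V_k

# Toy usage: a random 1000-term by 400-document matrix.
A = np.abs(np.random.default_rng(1).standard_normal((1000, 400)))
U, s_vals, V = monte_carlo_svd(A, k=20, s=80)
```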
Abstract:
Automatic indexing and retrieval of digital data poses major challenges. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieval of such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. For a number of years, research has been ongoing in the field of ontological engineering, with the aim of using ontologies to add such (meta) knowledge to information. In this paper, we describe the architecture of a system, Dynamic REtrieval Analysis and semantic metadata Management (DREAM), designed to automatically and intelligently index huge repositories of special-effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM demonstrator has been evaluated as deployed in the film post-production phase, supporting the storage, indexing and retrieval of large data sets of special-effects video clips as an exemplar application domain. This paper provides its performance and usability results and highlights the scope for future enhancements of the DREAM architecture, which has proven successful in its first and possibly most challenging proving ground, namely film production, where it is already in routine use within our test-bed partners' creative processes. (C) 2009 Published by Elsevier B.V.
Abstract:
A novel framework referred to as collaterally confirmed labelling (CCL) is proposed, aiming to localise the visual semantics to regions of interest in images with textual keywords. Both the primary image and collateral textual modalities are exploited in a mutually co-referencing and complementary fashion. The collateral content- and context-based knowledge is used to bias the mapping from low-level region-based visual primitives to high-level visual concepts defined in a visual vocabulary. We introduce the notion of collateral context, which is represented as a co-occurrence matrix of the visual keywords. A collaborative mapping scheme is devised using statistical methods such as Gaussian distributions or Euclidean distance, together with a collateral content- and context-driven inference mechanism. We introduce a novel high-level visual content descriptor devised for semantic-based image classification and retrieval. The proposed image feature vector model is fundamentally underpinned by the CCL framework. Two different high-level image feature vector models are developed based on the CCL labelling results, for the purposes of image data clustering and retrieval, respectively. A subset of the Corel image collection has been used to evaluate our proposed method. The experimental results to date already indicate that the proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models. (C) 2007 Elsevier B.V. All rights reserved.
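To make the mapping step concrete, the sketch below scores a region's low-level feature vector against concept prototypes with a Gaussian (squared-Euclidean-distance) likelihood and multiplies in a simple prior derived from the collateral textual keywords. The vocabulary, prototypes, and weighting are invented for illustration rather than taken from the CCL implementation.

```python
import numpy as np

# Invented visual vocabulary: concept -> prototype low-level feature vector.
PROTOTYPES = {
    "grass": np.array([0.2, 0.8, 0.1]),
    "sea":   np.array([0.1, 0.3, 0.9]),
    "sand":  np.array([0.8, 0.7, 0.3]),
}

def label_region(region_features, collateral_keywords, sigma=0.25):
    """Map a region's low-level features to a visual concept, biased by the
    collateral textual modality (keywords accompanying the image)."""
    scores = {}
    for concept, proto in PROTOTYPES.items():
        # Gaussian likelihood based on squared Euclidean distance to the prototype.
        likelihood = np.exp(-np.sum((region_features - proto) ** 2) / (2 * sigma**2))
        # Collateral-content prior: boost concepts mentioned in the image's text.
        prior = 2.0 if concept in collateral_keywords else 1.0
        scores[concept] = likelihood * prior
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

print(label_region(np.array([0.15, 0.5, 0.7]), collateral_keywords={"sea", "boat"}))
```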