114 results for N content


Relevance: 20.00%

Abstract:

Objective: The aim of the present study was to determine the relationship between the characteristics of general practices and the perceptions of the psychological content of consultations by GPs in those practices. Methods: A cross-sectional survey was conducted of all GPs (22 GPs based in nine practices) serving a discrete inner-city community of 41 000 residents. GPs were asked to complete a log-diary over a period of five working days, rating their perception of the psychological content of each consultation on a 4-point Likert scale, ranging from 0 (no psychological content) to 3 (entirely psychological in content). The influence of GP and practice characteristics on psychological content scores was examined. Results: Data were available for every surgery-based consultation (n = 2206) conducted by all 22 participating GPs over the study period. The mean psychological content score was 0.58 (SD 0.33). Sixty-four percent of consultations were recorded as being without any psychological content; 6% were entirely psychological in content. Higher psychological content scores were significantly associated with younger GPs, training practices (n = 3), group practices (n = 4), the presence of on-site mental health workers (n = 5), higher antidepressant prescribing volumes and the achievement of vaccine and smear targets. Training status had the greatest predictive power, explaining 51% of the variation in psychological content. Practice consultation rates, GP list size, annual psychiatric referral rates and volumes of benzodiazepine prescribing were not related to psychological content scores. Conclusion: Increased awareness by GPs of the psychological dimension within a consultation may be a feature of the educational environment of training practices.

Relevance: 20.00%

Abstract:

When people monitor the rapid serial visual presentation (RSVP) of stimuli for two targets (T1 and T2), they often miss T2 if it falls into a time window of about half a second after T1 onset, a phenomenon known as the attentional blink (AB). We found that overall performance in an RSVP task was impaired by a concurrent short-term memory (STM) task and, furthermore, that this effect increased when STM load was higher and when its content was more task relevant. Loading visually defined stimuli and adding articulatory suppression further impaired performance on the RSVP task, but the size of the AB over time (i.e., T1-T2 lag) remained unaffected by load or content. This suggested that at least part of the performance in an RSVP task reflects interference between competing codes within STM, as interference models have held, whereas the AB proper reflects capacity limitations in the transfer to STM, as consolidation models have claimed.

Relevance: 20.00%

Abstract:

This paper describes a framework architecture for the automated re-purposing and efficient delivery of multimedia content stored in CMSs. It deploys specifically designed templates as well as adaptation rules based on a hierarchy of profiles that accommodate user, device and network requirements, which are invoked as constraints in the adaptation process. The user profile provides information in accordance with the opt-in principle, while the device and network profiles provide operational constraints such as resolution and bandwidth limitations. The profile hierarchy ensures that the adaptation privileges the user's preferences. As part of the adaptation, we took into account support for users' special needs and therefore adopted a template-based approach that simplifies the adaptation process by integrating accessibility-by-design into the template.
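
As an illustration of the profile hierarchy described above, the following minimal sketch resolves a user profile against device and network profiles so that user preferences are honoured only within the operational limits. The profile fields and the merge policy are assumptions for illustration, not the framework's actual data model.

# Illustrative sketch only: profile names, fields and the merge order are
# assumptions, not the framework's actual data model.
from dataclasses import dataclass, field

@dataclass
class Profile:
    name: str
    constraints: dict = field(default_factory=dict)

def resolve(user: Profile, device: Profile, network: Profile) -> dict:
    """Merge profiles so that user preferences take precedence, but never
    exceed the operational limits imposed by device and network."""
    resolved = {}
    operational = {**network.constraints, **device.constraints}
    for key, limit in operational.items():
        preferred = user.constraints.get(key, limit)
        # Numeric constraints are capped at the operational limit.
        resolved[key] = min(preferred, limit) if isinstance(limit, (int, float)) else preferred
    # Purely user-level preferences (e.g. accessibility options) pass through.
    for key, value in user.constraints.items():
        resolved.setdefault(key, value)
    return resolved

user = Profile("user", {"font_scale": 1.5, "max_image_width": 1200})
device = Profile("device", {"max_image_width": 800, "supports_video": True})
network = Profile("network", {"max_bitrate_kbps": 512})
print(resolve(user, device, network))
# {'max_bitrate_kbps': 512, 'max_image_width': 800, 'supports_video': True, 'font_scale': 1.5}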

Relevance: 20.00%

Abstract:

There are still major challenges in the area of automatic indexing and retrieval of multimedia content for very large corpora. Current indexing and retrieval applications still use keywords to index multimedia content, and those keywords usually do not provide any knowledge about the semantic content of the data. With the increasing amount of multimedia content, it is inefficient to continue with this approach. In this paper, we describe the DREAM project, which addresses these challenges by proposing a new framework for semi-automatic annotation and retrieval of multimedia based on semantic content. The framework uses Topic Map technology as a tool to model the knowledge automatically extracted from the multimedia content by an Automatic Labelling Engine. We describe how we acquire knowledge from the content and represent it, using NLP support to automatically generate Topic Maps. The framework is described in the context of film post-production.
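
The abstract does not give DREAM's actual Topic Map schema, so the following is only a minimal sketch of the general idea: labels produced by an automatic labelling step become topics, are linked back to the clip they came from, and are associated with each other when they co-occur. All names and structures here are invented for illustration.

# Minimal illustrative sketch of turning automatically extracted labels into a
# topic-map-like structure. The schema (topic/association/occurrence dicts) and
# the example labels are assumptions, not DREAM's actual Topic Map model.

def build_topic_map(labels, clip_id):
    """labels: list of (term, topic_type) pairs extracted from a media clip."""
    topic_map = {"topics": {}, "associations": [], "occurrences": []}
    for term, topic_type in labels:
        topic_map["topics"].setdefault(term, {"type": topic_type})
        # Each topic is linked back to the clip it was extracted from.
        topic_map["occurrences"].append({"topic": term, "resource": clip_id})
    # Terms co-occurring in the same clip are related by a weak association.
    terms = [t for t, _ in labels]
    for i, a in enumerate(terms):
        for b in terms[i + 1:]:
            topic_map["associations"].append({"type": "co-occurs-with", "members": (a, b)})
    return topic_map

labels = [("explosion", "visual-effect"), ("warehouse", "location"), ("night", "lighting")]
print(build_topic_map(labels, clip_id="clip_0042"))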

Relevance: 20.00%

Abstract:

There are still major challenges in the area of automatic indexing and retrieval of digital data. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieval of such data based on the semantic content rather than keywords. To enable intelligent web interactions or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. Research has been ongoing for a few years in the field of ontological engineering with the aim of using ontologies to add knowledge to information. In this paper, we describe the architecture of a system designed to automatically and intelligently index huge repositories of special effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval.
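
The network of ontologies itself is not described in the abstract, so the sketch below only illustrates the general idea of semantic rather than keyword indexing of a clip. The namespace, classes and properties are invented, and rdflib is used as a convenient RDF toolkit; none of this is necessarily what the system actually uses.

# Illustrative sketch of annotating a clip against a small ontology and
# retrieving it by semantic content rather than keyword string matching.
# The namespace, class names and properties are invented. Requires rdflib.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

FX = Namespace("http://example.org/sfx-ontology#")

g = Graph()
g.bind("fx", FX)

clip = URIRef("http://example.org/clips/clip_0042")
g.add((clip, RDF.type, FX.SpecialEffectsClip))
g.add((clip, FX.depicts, FX.Explosion))
g.add((clip, FX.setting, FX.Warehouse))
g.add((clip, FX.duration, Literal(12.5)))

results = g.query("""
    PREFIX fx: <http://example.org/sfx-ontology#>
    SELECT ?clip WHERE { ?clip a fx:SpecialEffectsClip ; fx:depicts fx:Explosion . }
""")
for row in results:
    print(row.clip)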

Relevance: 20.00%

Abstract:

Automatic indexing and retrieval of digital data poses major challenges. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieval of such data based on the semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. For a number of years, research has been ongoing in the field of ontological engineering with the aim of using ontologies to add such (meta) knowledge to information. In this paper, we describe the architecture of a system, Dynamic REtrieval Analysis and semantic metadata Management (DREAM), designed to automatically and intelligently index huge repositories of special effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM Demonstrator has been evaluated as deployed in the film post-production phase, an exemplar application domain, where it supports the storage, indexing and retrieval of large data sets of special effects video clips. This paper reports its performance and usability results and highlights the scope for future enhancements of the DREAM architecture, which has proven successful in its first and possibly most challenging proving ground, namely film production, where it is already in routine use within our test-bed partners' creative processes. (C) 2009 Published by Elsevier B.V.

Relevance: 20.00%

Abstract:

There are three key driving forces behind the development of Internet Content Management Systems (CMS): a desire to manage the explosion of content, a desire to provide structure and meaning to content in order to make it accessible, and a desire to work collaboratively to manipulate content in some meaningful way. Yet the traditional CMS has been unable to meet the last of these requirements, often failing to provide sufficient tools for collaboration in a distributed context. Peer-to-Peer (P2P) systems are networks in which every node is an equal participant (whether transmitting data, exchanging content, or invoking services) and there are no centralised administrative or coordinating authorities. P2P systems are inherently more scalable than equivalent client-server implementations as they tend to use resources at the edge of the network much more effectively. This paper details the rationale and design of a P2P middleware for collaborative content management.
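
As a minimal illustration of the P2P principle described above (equal peers, no central coordinator), the sketch below lets three in-process peers spread a content update by repeated pairwise exchange. The gossip scheme and data structures are assumptions for illustration, not the middleware's actual protocol.

# Minimal, self-contained sketch of the peer-to-peer idea: every node is an
# equal participant and there is no central coordinating authority.
import random

class Peer:
    def __init__(self, name):
        self.name = name
        self.neighbours = []           # other peers this node knows about
        self.content = {}              # content_id -> (version, payload)

    def publish(self, content_id, payload):
        version = self.content.get(content_id, (0, None))[0] + 1
        self.content[content_id] = (version, payload)

    def gossip(self):
        """Push local content to a randomly chosen neighbour; the neighbour
        keeps whichever version is newer (last-writer-wins for simplicity)."""
        if not self.neighbours:
            return
        other = random.choice(self.neighbours)
        for cid, (version, payload) in self.content.items():
            if other.content.get(cid, (0, None))[0] < version:
                other.content[cid] = (version, payload)

# Three equal peers, no server: updates spread by repeated pairwise exchange.
peers = [Peer(n) for n in ("a", "b", "c")]
for p in peers:
    p.neighbours = [q for q in peers if q is not p]
peers[0].publish("article-1", "draft text")
for _ in range(10):
    for p in peers:
        p.gossip()
print({p.name: p.content.get("article-1") for p in peers})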

Relevance: 20.00%

Abstract:

The THz water content index of a sample is defined, and the advantages of using such a metric to estimate a sample's relative water content are discussed. The errors from reflectance measurements performed at two different THz frequencies using a quasi-optical null-balance reflectometer are propagated to the error in the estimated sample water content index.
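
The abstract does not state the functional form of the index, so the sketch below assumes, purely for illustration, a normalised difference of the two reflectance measurements, and shows how first-order error propagation carries the measurement uncertainties through to the index estimate.

# Illustrative only: the index form I = (R1 - R2) / (R1 + R2) is an assumption,
# not necessarily the paper's definition; the point is the error propagation.
import math

def water_content_index(r1, r2):
    return (r1 - r2) / (r1 + r2)

def index_error(r1, r2, sigma_r1, sigma_r2):
    """First-order (Gaussian) error propagation for the assumed index form."""
    s = r1 + r2
    dI_dr1 = 2.0 * r2 / s**2   # partial derivative of I with respect to R1
    dI_dr2 = -2.0 * r1 / s**2  # partial derivative of I with respect to R2
    return math.sqrt((dI_dr1 * sigma_r1) ** 2 + (dI_dr2 * sigma_r2) ** 2)

r1, r2 = 0.42, 0.31            # reflectances at the two THz frequencies (example values)
sigma = 0.01                   # assumed measurement uncertainty on each reflectance
print(water_content_index(r1, r2), index_error(r1, r2, sigma, sigma))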

Relevance: 20.00%

Abstract:

Information provision that addresses changing requirements can best be supported by content management. Current information technology enables information to be stored in, and provided from, various distributed sources. Identifying and retrieving relevant information requires effective mechanisms for information discovery and assembly. This paper presents a method that enables the design of such mechanisms, with a set of techniques for articulating and profiling users' requirements, formulating information provision specifications, realising management of information content in repositories, and facilitating dynamic response to users' requirements during the process of knowledge construction. These functions are represented in an ontology that integrates the capabilities of the mechanisms. The ontological modelling in this paper adopts semiotic principles with embedded norms to ensure a coherent course of action across these mechanisms. (C) 2008 Elsevier B.V. All rights reserved.
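
As a rough illustration of norm-governed provision in the spirit described above, the sketch below encodes a single behavioural norm in the "whenever <condition>, <agent> is <deontic operator> to do <action>" style often used in organisational semiotics. The norm, profile fields and actions are invented examples, not the paper's own rules.

# Sketch of a behavioural norm driving information provision. Everything
# concrete here (the condition, the repository name, the profile fields) is an
# invented example.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Norm:
    condition: Callable[[dict], bool]   # evaluated against a user/requirement profile
    agent: str
    deontic: str                        # "obliged", "permitted", "prohibited"
    action: Callable[[dict], None]

    def apply(self, profile: dict) -> None:
        if self.deontic == "obliged" and self.condition(profile):
            self.action(profile)

norms = [
    Norm(
        condition=lambda p: "renewable energy" in p.get("interests", []),
        agent="content-provision-mechanism",
        deontic="obliged",
        action=lambda p: p.setdefault("deliver", []).append("renewable-energy-repository"),
    ),
]

profile = {"interests": ["renewable energy"]}
for norm in norms:
    norm.apply(profile)
print(profile["deliver"])   # ['renewable-energy-repository']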

Relevance: 20.00%

Abstract:

A novel framework referred to as collaterally confirmed labelling (CCL) is proposed, which aims to localise visual semantics to regions of interest in images using textual keywords. Both the primary image and collateral textual modalities are exploited in a mutually co-referencing and complementary fashion. Collateral content- and context-based knowledge is used to bias the mapping from low-level region-based visual primitives to high-level visual concepts defined in a visual vocabulary. We introduce the notion of collateral context, which is represented as a co-occurrence matrix of the visual keywords. A collaborative mapping scheme is devised using statistical methods such as the Gaussian distribution or Euclidean distance, together with a collateral content- and context-driven inference mechanism. We introduce a novel high-level visual content descriptor that is devised for performing semantic-based image classification and retrieval. The proposed image feature vector model is fundamentally underpinned by the CCL framework. Two different high-level image feature vector models are developed, based on the CCL labelling results, for the purposes of image data clustering and retrieval, respectively. A subset of the Corel image collection has been used for evaluating our proposed method. The experimental results to date already indicate that the proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models. (C) 2007 Elsevier B.V. All rights reserved.
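
To make the two ingredients above concrete, the sketch below combines a toy co-occurrence matrix of visual keywords (the collateral context) with a Euclidean-distance mapping from a region's low-level features to the nearest concept in a small visual vocabulary. The vocabulary, prototypes and weighting are invented for illustration, not the paper's actual scheme.

# Illustrative sketch: distance-based concept assignment biased by collateral
# context (keyword co-occurrence). All numbers and names are invented.
import numpy as np

vocabulary = ["sky", "sea", "sand", "grass"]

# Prototype low-level features for each concept (e.g. mean colour/texture).
prototypes = np.array([
    [0.55, 0.65, 0.90],   # sky
    [0.20, 0.45, 0.70],   # sea
    [0.85, 0.75, 0.55],   # sand
    [0.25, 0.60, 0.25],   # grass
])

# Collateral context: how often keyword pairs co-occur in the collateral text.
cooccurrence = np.array([
    [0, 8, 5, 2],
    [8, 0, 6, 1],
    [5, 6, 0, 1],
    [2, 1, 1, 0],
])

def label_region(region_features, already_assigned):
    """Pick the concept whose prototype is nearest in Euclidean distance,
    biased towards concepts that co-occur with keywords already assigned."""
    distances = np.linalg.norm(prototypes - region_features, axis=1)
    context = np.zeros(len(vocabulary))
    for kw in already_assigned:
        context += cooccurrence[vocabulary.index(kw)]
    # Smaller score is better: distance discounted by (normalised) context support.
    scores = distances - 0.05 * (context / (context.max() + 1e-9))
    return vocabulary[int(np.argmin(scores))]

print(label_region(np.array([0.30, 0.50, 0.72]), already_assigned=["sky"]))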

Relevance: 20.00%

Abstract:

In this paper, we introduce a novel high-level visual content descriptor devised for performing semantic-based image classification and retrieval. The work can be treated as an attempt at bridging the so-called "semantic gap". The proposed image feature vector model is fundamentally underpinned by an automatic image labelling framework, called Collaterally Cued Labelling (CCL), which combines collateral knowledge extracted from the texts accompanying the images with state-of-the-art low-level visual feature extraction techniques to automatically assign textual keywords to image regions. A subset of the Corel image collection was used for evaluating the proposed method. The experimental results indicate that our semantic-level visual content descriptors outperform both conventional visual and textual image feature models.
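
As a rough sketch of what a semantic-level feature vector can look like once regions carry textual keywords, the example below describes each image by a normalised keyword histogram and ranks candidate images by cosine similarity to a query. The vocabulary and example labels are invented for illustration; this is not the CCL feature model itself.

# Illustrative semantic-level image descriptor: a normalised histogram of the
# keywords assigned to an image's regions, compared by cosine similarity.
import numpy as np

vocabulary = ["sky", "sea", "sand", "grass", "person"]

def semantic_vector(region_labels):
    """Normalised histogram of keyword assignments over an image's regions."""
    vec = np.zeros(len(vocabulary))
    for label in region_labels:
        vec[vocabulary.index(label)] += 1
    return vec / max(vec.sum(), 1)

query = semantic_vector(["sky", "sea", "sand"])
candidates = {
    "beach_01": semantic_vector(["sky", "sea", "sand", "person"]),
    "meadow_07": semantic_vector(["sky", "grass", "grass"]),
}
# Rank candidate images by cosine similarity to the query's semantic vector.
for name, vec in sorted(
    candidates.items(),
    key=lambda kv: -float(np.dot(query, kv[1]) / (np.linalg.norm(query) * np.linalg.norm(kv[1]))),
):
    print(name)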