45 results for Incremental Information-content


Relevance: 30.00%

Abstract:

This paper describes a framework architecture for the automated re-purposing and efficient delivery of multimedia content stored in content management systems (CMSs). It deploys specifically designed templates, as well as adaptation rules based on a hierarchy of profiles, to accommodate user, device and network requirements invoked as constraints in the adaptation process. The user profile provides information in accordance with the opt-in principle, while the device and network profiles provide operational constraints such as resolution and bandwidth limitations. The profile hierarchy ensures that the adaptation privileges the user's preferences. As part of the adaptation, we took into account support for users' special needs and therefore adopted a template-based approach that simplifies the adaptation process by integrating accessibility-by-design into the template.
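To make the precedence rule concrete, here is a minimal Python sketch of constraint resolution over such a profile hierarchy; the profile fields and merging logic are hypothetical, not taken from the paper.

```python
# Minimal sketch (not the paper's implementation) of constraint resolution
# over a profile hierarchy: the user profile is applied last, so its
# preferences take precedence over device and network constraints.
# All field names are hypothetical.

def resolve_constraints(user, device, network):
    """Merge profiles; a later profile overrides values set by earlier ones."""
    constraints = {}
    for profile in (network, device, user):  # user applied last, so it wins
        constraints.update({k: v for k, v in profile.items() if v is not None})
    return constraints

user_profile = {"font_size": "large", "captions": True, "max_resolution": None}
device_profile = {"max_resolution": "1280x720", "font_size": None}
network_profile = {"max_bitrate_kbps": 800}

print(resolve_constraints(user_profile, device_profile, network_profile))
# {'max_bitrate_kbps': 800, 'max_resolution': '1280x720',
#  'font_size': 'large', 'captions': True}
```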

Relevance: 30.00%

Abstract:

There are still major challenges in the area of automatic indexing and retrieval of digital data. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. Research has been ongoing for some years in the field of ontological engineering, with the aim of using ontologies to add knowledge to information. In this paper we describe the architecture of a system designed to automatically and intelligently index huge repositories of special effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval.
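As a rough illustration of ontology-assisted indexing of this kind (the concepts and structures below are hypothetical, not the system's actual design), a clip's detected keywords can be expanded with their ancestor concepts so that queries match at a more general semantic level:

```python
# Minimal sketch of ontology-assisted indexing: each clip's keywords are
# expanded with their ancestor concepts before being added to the index.
# The ontology and clip data are illustrative, not the system described.

ONTOLOGY = {  # child -> parent
    "explosion": "pyrotechnics",
    "pyrotechnics": "special_effect",
    "smoke": "atmospheric_effect",
    "atmospheric_effect": "special_effect",
}

def expand(concept):
    """Return the concept plus all of its ancestors."""
    concepts = [concept]
    while concept in ONTOLOGY:
        concept = ONTOLOGY[concept]
        concepts.append(concept)
    return concepts

def index_clip(clip_id, keywords, index):
    for kw in keywords:
        for concept in expand(kw):
            index.setdefault(concept, set()).add(clip_id)

index = {}
index_clip("clip_042", ["explosion", "smoke"], index)
print(sorted(index["special_effect"]))  # ['clip_042'] - matched via ancestors
```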

Relevance: 30.00%

Abstract:

A large volume of visual content remains inaccessible until effective and efficient indexing and retrieval of such data is achieved. In this paper, we introduce the DREAM system, a knowledge-assisted, semantic-driven, context-aware visual information retrieval system applied in the film post-production domain. We focus mainly on the automatic labelling and topic-map-related aspects of the framework. The use of context-related collateral knowledge, represented by a novel probabilistic visual keyword co-occurrence matrix, was proven effective in the experiments conducted during system evaluation. The automatically generated semantic labels were fed into the Topic Map Engine, which automatically constructs ontological networks using Topic Maps technology, dramatically enhancing the indexing and retrieval performance of the system towards an even higher semantic level.
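A minimal sketch of how a probabilistic keyword co-occurrence matrix can be built and queried, assuming simple set-valued annotations; the paper's actual construction may well differ:

```python
# Sketch of a probabilistic keyword co-occurrence matrix (hypothetical data):
# counts of keywords appearing together are normalised into conditional
# probabilities, which can then boost candidate labels that fit the context.

from collections import Counter
from itertools import combinations

annotated_clips = [
    {"fire", "smoke", "explosion"},
    {"fire", "smoke"},
    {"water", "splash"},
]

pair_counts, keyword_counts = Counter(), Counter()
for labels in annotated_clips:
    keyword_counts.update(labels)
    for a, b in combinations(sorted(labels), 2):
        pair_counts[(a, b)] += 1
        pair_counts[(b, a)] += 1

def p_given(a, b):
    """Estimate P(a | b) from co-occurrence counts."""
    return pair_counts[(a, b)] / keyword_counts[b] if keyword_counts[b] else 0.0

print(round(p_given("smoke", "fire"), 2))      # 1.0 - smoke always co-occurs with fire
print(round(p_given("explosion", "fire"), 2))  # 0.5
```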

Relevance: 30.00%

Abstract:

Automatic indexing and retrieval of digital data poses major challenges. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. For a number of years research has been ongoing in the field of ontological engineering, with the aim of using ontologies to add such (meta) knowledge to information. In this paper, we describe the architecture of Dynamic REtrieval Analysis and semantic metadata Management (DREAM), a system designed to automatically and intelligently index huge repositories of special effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM Demonstrator has been evaluated as deployed in the film post-production phase to support the process of storage, indexing and retrieval of large data sets of special effects video clips as an exemplar application domain. This paper reports its performance and usability results and highlights the scope for future enhancements of the DREAM architecture, which has proven successful in its first and possibly most challenging proving ground, namely film production, where it is already in routine use within our test-bed partners' creative processes. (C) 2009 Published by Elsevier B.V.
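On the retrieval side, a scalable ontology network lets a general query match clips indexed under more specific concepts. The following Python sketch is illustrative only, with hypothetical structures rather than DREAM's implementation:

```python
# Hypothetical sketch of ontology-assisted retrieval: a query term is
# expanded downwards through the ontology so that a general query such as
# "special_effect" also matches clips indexed under its descendants.

ONTOLOGY = {  # parent -> children
    "special_effect": ["pyrotechnics", "atmospheric_effect"],
    "pyrotechnics": ["explosion"],
    "atmospheric_effect": ["smoke"],
}

INDEX = {"explosion": {"clip_042"}, "smoke": {"clip_042", "clip_101"}}

def retrieve(term):
    """Collect clips indexed under the term or any of its descendants."""
    results, frontier = set(), [term]
    while frontier:
        concept = frontier.pop()
        results |= INDEX.get(concept, set())
        frontier.extend(ONTOLOGY.get(concept, []))
    return results

print(sorted(retrieve("special_effect")))  # ['clip_042', 'clip_101']
```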

Relevance: 30.00%

Abstract:

The quality of information provision considerably influences knowledge construction driven by individual users' needs. In the design of information systems for e-learning, personal information requirements should be incorporated to determine the selection of suitable learning content, instructive sequencing of that content, and its effective presentation. This is an important part of instructional design for a personalised information package. Current research reveals a lack of means by which individual users' information requirements can be effectively incorporated to support personal knowledge construction. This paper presents a method that enables the articulation of users' requirements based on established learning theories and requirements engineering paradigms. A user's information requirements can be systematically encapsulated in a user profile (i.e. a user requirements space) and further transformed into instructional design specifications (i.e. an information space). These two spaces allow the discovery of information-requirement patterns for self-maintaining and self-adapting personalisation that enhances the experience of the knowledge construction process.
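A toy sketch of the transformation from the user requirements space to the information space, with hypothetical profile fields and mapping rules chosen purely for illustration:

```python
# Illustrative mapping from a user requirements space (profile) to an
# information space (instructional design specification). Field names and
# rules are hypothetical, intended only to make the two-space idea concrete.

def to_design_spec(profile):
    return {
        # selection: choose content matching the learner's level
        "content_level": profile["prior_knowledge"],
        # sequencing: theory-first for reflective learners, task-first otherwise
        "sequence": "theory_first" if profile["style"] == "reflective" else "task_first",
        # presentation: respect modality preference
        "media": "video" if profile["prefers_visual"] else "text",
    }

profile = {"prior_knowledge": "beginner", "style": "reflective", "prefers_visual": True}
print(to_design_spec(profile))
# {'content_level': 'beginner', 'sequence': 'theory_first', 'media': 'video'}
```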

Relevance: 30.00%

Abstract:

The valuation of farmland is a perennial issue for agricultural policy, given its importance in the farm investment portfolio. Despite the significance of farmland values to farmer wealth, prediction remains a difficult task. This study develops a dynamic information measure to examine the informational content of farmland values and farm income in explaining the distribution of farmland values over time.
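The abstract does not specify the form of the measure. As one hedged illustration only, a dynamic information measure of this kind is often built on relative entropy between the observed distribution of values and the distribution implied by a predictor:

\[
I_t = \sum_i p_{i,t} \,\ln\frac{p_{i,t}}{q_{i,t}},
\]

where \(p_{i,t}\) is the observed probability that farmland values fall in bin \(i\) at time \(t\) and \(q_{i,t}\) is the probability implied by an income-based predictor; \(I_t = 0\) when income fully explains the observed distribution. This functional form is an assumption for illustration, not the paper's definition.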

Relevance: 30.00%

Abstract:

Although the use of climate scenarios for impact assessment has grown steadily since the 1990s, uptake of such information for adaptation is lagging by nearly a decade in terms of scientific output. Nonetheless, integration of climate risk information in development planning is now a priority for donor agencies because of the need to prepare for climate change impacts across different sectors and countries. This urgency stems from concerns that progress made against Millennium Development Goals (MDGs) could be threatened by anthropogenic climate change beyond 2015. Up to this time the human signal, though detectable and growing, will be a relatively small component of climate variability and change. This implies the need for a twin-track approach: on the one hand, vulnerability assessments of social and economic strategies for coping with present climate extremes and variability, and, on the other hand, development of climate forecast tools and scenarios to evaluate sector-specific, incremental changes in risk over the next few decades. This review starts by describing the climate outlook for the next couple of decades and the implications for adaptation assessments. We then review ways in which climate risk information is already being used in adaptation assessments and evaluate the strengths and weaknesses of three groups of techniques. Next we identify knowledge gaps and opportunities for improving the production and uptake of climate risk information for the 2020s. We assert that climate change scenarios can meet some, but not all, of the needs of adaptation planning. Even then, the choice of scenario technique must be matched to the intended application, taking into account local constraints of time, resources, human capacity and supporting infrastructure. We also show that much greater attention should be given to improving and critiquing models used for climate impact assessment, as standard practice. Finally, we highlight the over-arching need for the scientific community to provide more information and guidance on adapting to the risks of climate variability and change over nearer time horizons (i.e. the 2020s). Although the focus of the review is on information provision and uptake in developing regions, it is clear that many developed countries are facing the same challenges. Copyright © 2009 Royal Meteorological Society

Relevance: 30.00%

Abstract:

A novel technique for the noninvasive continuous measurement of leaf water content is presented. The technique is based on transmission measurements of terahertz radiation with a null-balance quasi-optical transmissometer operating at 94 GHz. A model for the propagation of terahertz radiation through leaves is presented. This, in conjunction with leaf thickness information determined separately, may be used to quantitatively relate transmittance measurements to leaf water content. Measurements on Phormium tenax and Fatsia japonica leaves using a dispersive Fourier transform spectrometer in the range 100-500 GHz are also reported.
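A propagation model of this kind typically takes a Beer-Lambert form; as a generic sketch (not necessarily the paper's model),

\[
\tau(\nu) = (1-R)^2\, e^{-\alpha_w(\nu)\,c\,d}
\quad\Longrightarrow\quad
c = -\frac{1}{\alpha_w(\nu)\,d}\,\ln\frac{\tau(\nu)}{(1-R)^2},
\]

where \(\tau\) is the measured transmittance at frequency \(\nu\), \(R\) accounts for surface reflection losses, \(\alpha_w\) is the absorption coefficient of liquid water, \(d\) is the separately measured leaf thickness, and \(c\) is the volumetric water content being solved for.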

Relevance: 30.00%

Abstract:

This study was designed to determine the response of in vitro fermentation parameters to incremental levels of polyethylene glycol (PEG) when tanniniferous tree fruits (Dichrostachys cinerea, Acacia erioloba, A. erubiscens, A. nilotica and Piliostigma thonningii) were fermented using the Reading Pressure Technique. The trivalent-ytterbium-precipitable phenolics content of the fruit substrates ranged from 175 g/kg DM in A. erubiscens to 607 g/kg DM in A. nilotica, while the soluble condensed tannin content ranged from 0.09 AU550nm/40 mg in A. erioloba to 0.52 AU550nm/40 mg in D. cinerea. ADF was highest in P. thonningii fruits (402 g/kg DM) and lowest in A. nilotica fruits (165 g/kg DM). Increasing the level of PEG caused an exponential rise to a maximum (asymptote) in cumulative gas production, rate of gas production and nitrogen degradability in all substrates except P. thonningii fruits. Dry matter degradability for fruits containing higher levels of soluble condensed tannins (D. cinerea and P. thonningii) showed little response to incremental levels of PEG after incubation for 24 h. The minimum level of PEG required to maximize in vitro fermentation of tree fruits was found to be 200 mg PEG/g DM of sample for all tree species except A. erubiscens fruits, which required 100 mg PEG/g DM of sample. The study provides evidence that PEG levels lower than 1 g/g DM of sample can be used for in vitro tannin bioassays to reduce the cost of evaluating non-conventional tanniniferous feedstuffs used in developing countries in the tropics and subtropics. The use of in vitro nitrogen degradability in place of the favoured dry matter degradability improved the accuracy of PEG as a diagnostic tool for tannins in in vitro fermentation systems.
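An "exponential rise to a maximum" response of this kind is conventionally described by a saturating form such as

\[
G(P) = G_{\max}\left(1 - e^{-kP}\right),
\]

where \(G(P)\) is, for example, cumulative gas production at PEG level \(P\) (mg/g DM), \(G_{\max}\) is the asymptotic maximum and \(k\) governs how quickly added PEG neutralises the tannins; the PEG level giving 95% of \(G_{\max}\) is then \(P_{95} = \ln(20)/k\). The specific functional form is an assumption consistent with the abstract's description, not the study's fitted model.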

Relevance: 30.00%

Abstract:

Background: Medication errors are common in primary care and are associated with considerable risk of patient harm. We tested whether a pharmacist-led, information-technology-based intervention was more effective than simple feedback in reducing the number of patients at risk from hazardous prescribing and inadequate blood-test monitoring of medicines 6 months after the intervention. Methods: In this pragmatic, cluster randomised trial, general practices in the UK were stratified by research site and list size, and randomly assigned by a web-based randomisation service in block sizes of two or four to one of two groups. The practices were allocated to either computer-generated simple feedback for at-risk patients (control) or a pharmacist-led information technology intervention (PINCER), composed of feedback, educational outreach, and dedicated support. The allocation was masked to general practices, patients, pharmacists, researchers, and statisticians. Primary outcomes were the proportions of patients at 6 months after the intervention who had had any of three clinically important errors: non-selective non-steroidal anti-inflammatory drugs (NSAIDs) prescribed to those with a history of peptic ulcer without co-prescription of a proton-pump inhibitor; β blockers prescribed to those with a history of asthma; or long-term prescription of an angiotensin converting enzyme (ACE) inhibitor or loop diuretics to those 75 years or older without assessment of urea and electrolytes in the preceding 15 months. The cost per error avoided was estimated by incremental cost-effectiveness analysis. This study is registered with Controlled-Trials.com, number ISRCTN21785299. Findings: 72 general practices with a combined list size of 480 942 patients were randomised. At 6 months' follow-up, patients in the PINCER group were significantly less likely to have been prescribed a non-selective NSAID if they had a history of peptic ulcer without gastroprotection (OR 0.58, 95% CI 0.38–0.89); a β blocker if they had asthma (0.73, 0.58–0.91); or an ACE inhibitor or loop diuretic without appropriate monitoring (0.51, 0.34–0.78). PINCER has a 95% probability of being cost-effective if the decision-maker's ceiling willingness to pay reaches £75 per error avoided at 6 months. Interpretation: The PINCER intervention is an effective method for reducing a range of medication errors in general practices with computerised clinical records. Funding: Patient Safety Research Portfolio, Department of Health, England.
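The cost-per-error-avoided figure follows from a standard incremental cost-effectiveness ratio; in generic form (not the trial's exact calculation):

\[
\text{ICER} = \frac{C_{\text{PINCER}} - C_{\text{control}}}{E_{\text{control}} - E_{\text{PINCER}}},
\]

where \(C\) is the mean cost per trial arm and \(E\) the number of patients with at least one of the targeted errors, so the denominator counts errors avoided. An intervention is judged cost-effective when this ratio falls below the decision-maker's ceiling willingness to pay (here £75 per error avoided).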

Relevance: 30.00%

Abstract:

Cloud imagery is not currently used in numerical weather prediction (NWP) to extract the type of dynamical information that experienced forecasters have extracted subjectively for many years. For example, rapidly developing mid-latitude cyclones have characteristic signatures in cloud imagery that are most fully appreciated from a sequence of images rather than from a single image. The Met Office is currently developing a technique to extract dynamical development information from satellite imagery using its full incremental 4D-Var (four-dimensional variational data assimilation) system. We investigate a simplified form of this technique in a fully nonlinear framework. We convert information on the vertical wind field, w(z), and profiles of temperature, T(z, t), and total water content, qt(z, t), as functions of height, z, and time, t, to a single brightness temperature by defining a 2D (vertical and time) variational assimilation testbed. The profiles of w, T and qt are updated using a simple vertical advection scheme. We define a basic cloud scheme to obtain the fractional cloud amount and, when combined with the temperature field, convert this information into a brightness temperature, having developed a simple radiative transfer scheme. With the exception of some matrix inversion routines, all our code is developed from scratch. Throughout the development process we test all aspects of our 2D assimilation system, and then run identical-twin experiments to try to recover information on the vertical velocity from a sequence of observations of brightness temperature. This thesis contains a comprehensive description of our nonlinear models and assimilation system, and the first experimental results.
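As a hedged illustration of the kind of "simple vertical advection scheme" described (the grid, variable names and boundary handling are assumptions, not the thesis's code), a first-order upwind step can be written as:

```python
import numpy as np

# Sketch of one first-order upwind vertical advection step of the kind the
# testbed might use to evolve T(z, t) or qt(z, t). Illustrative only.

def advect_upwind(field, w, dz, dt):
    """One upwind step of d(field)/dt = -w * d(field)/dz on a uniform grid."""
    out = field.copy()
    for k in range(1, len(field) - 1):
        if w[k] >= 0.0:   # upward motion: take the gradient from below
            grad = (field[k] - field[k - 1]) / dz
        else:             # downward motion: take the gradient from above
            grad = (field[k + 1] - field[k]) / dz
        out[k] = field[k] - dt * w[k] * grad
    return out

z = np.linspace(0.0, 10e3, 51)   # 51 levels up to 10 km (dz = 200 m)
T = 288.0 - 0.0065 * z           # standard-atmosphere temperature profile
w = np.full_like(z, 0.5)         # uniform 0.5 m/s updraught (CFL = 0.15)
T_next = advect_upwind(T, w, dz=200.0, dt=60.0)
print(T_next[25] - T[25])        # ~ +0.195 K: warmer air lifted from below
```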

Relevance: 30.00%

Abstract:

Three decades of ongoing executive concern over how to achieve successful alignment between business and information technology show the complexity of such a vital process. Most of the challenges of alignment relate to knowledge and organisational change, and several researchers have introduced mechanisms to address some of these challenges. However, these mechanisms pay little attention to multi-level effects, which results in a limited understanding of alignment across levels. We therefore reviewed these challenges from a multi-level learning perspective and found that business-IT alignment is related to the balance of exploitation and exploration strategies with the intellectual content of the individual, group and organisational levels.

Relevance: 30.00%

Abstract:

There is increasing pressure to capture video within Higher Education. Although much research has examined how communication technologies enhance information transfer during video playback, consideration of technical issues seems incongruous if we do not consider how presentation mode affects the information assimilated by, and the satisfaction of, learners with a range of individual differences and from a range of different backgrounds. This paper considers whether a relationship exists between the media and presentation mode used in recorded content and the level of information assimilation and satisfaction perceived by learners with a range of individual differences. The results aim to inform learning practitioners whether generic delivery is justified, or whether tailoring content delivery enhances the experience of specific learner groups.

Relevance: 30.00%

Abstract:

Introduction. Feature usage is a prerequisite to realising the benefits of investments in feature-rich systems. We propose that conceptualising the dependent variable 'system use' as 'level of use' and specifying it as a formative construct has greater value for measuring the post-adoption use of feature-rich systems. We then validate the content of the construct as a first step in developing a research instrument to measure it. The context of our study is the post-adoption use of electronic medical records (EMR) by primary care physicians. Method. Initially, a literature review of the empirical context defines the scope based on prior studies. Having identified core features from the literature, they are further refined with the help of experts in a consensus-seeking process that follows the Delphi technique. Results. The methodology was successfully applied to EMRs, which were selected as an example of feature-rich systems. A review of EMR usage and regulatory standards provided the feature input for the first round of the Delphi process. A panel of experts then reached consensus after four rounds, identifying ten task-based features that would be indicators of level of use. Conclusions. To study why some users deploy more advanced features than others, theories of post-adoption require a rich formative dependent variable that measures level of use. We have demonstrated that a context-sensitive literature review followed by refinement through a consensus-seeking process is a suitable methodology for validating the content of this dependent variable. This is the first step of instrument development prior to statistical confirmation with a larger sample.
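The formative specification matters because causality runs from the indicators to the construct. In generic structural-equation notation (an illustration, not the paper's model), 'level of use' \(\eta\) would be formed as

\[
\eta = \sum_{i=1}^{10} \gamma_i x_i + \zeta,
\]

where the \(x_i\) are the ten task-based feature-usage indicators, the \(\gamma_i\) their weights and \(\zeta\) a disturbance term; in a reflective specification the arrows would instead run from \(\eta\) to each indicator.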

Relevance: 30.00%

Abstract:

This paper employs a probit and a Markov switching model using information from the Conference Board Leading Indicator and other predictor variables to forecast the signs of future rental growth in four key U.S. commercial rent series. We find that both approaches have considerable power to predict changes in the direction of commercial rents up to two years ahead, exhibiting strong improvements over a naïve model, especially for the warehouse and apartment sectors. We find that while the Markov switching model appears to be more successful, it lags behind actual turnarounds in market outcomes, whereas the probit is able to detect whether rental growth will be positive or negative several quarters ahead.
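In generic form (an illustration of the two model classes named, not the paper's exact specification), the probit forecasts

\[
\Pr(\Delta r_{t+h} > 0 \mid x_t) = \Phi(x_t'\beta),
\]

where \(\Delta r_{t+h}\) is rental growth \(h\) quarters ahead, \(x_t\) stacks the leading indicator and other predictors and \(\Phi\) is the standard normal distribution function; the Markov switching alternative instead lets mean rental growth alternate between regimes driven by a latent state with fixed transition probabilities.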